Markov Logic Networks




New Liftable Classes for First-Order Probabilistic Inference

Seyed Mehran Kazemi, Angelika Kimmig, Guy Van den Broeck, David Poole

Neural Information Processing Systems

Statistical relational models provide compact encodings of probabilistic dependencies in relational domains, but result in highly intractable graphical models. The goal of lifted inference is to carry out probabilistic inference without needing to reason about each individual separately, by instead treating exchangeable, undistinguished objects as a whole. In this paper, we study the domain recursion inference rule, which, despite its central role in early theoretical results on domain-lifted inference, was later believed to be redundant.
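The contrast between grounded and lifted computation can be made concrete on a toy example. The sketch below is not from the paper; the single-formula MLN, the weight w, and the domain size n are all illustrative assumptions. It computes the same partition function twice: once by enumerating all 2^n ground worlds, and once in closed form by exploiting the exchangeability of the n individuals.

import itertools
import math

# Toy MLN: one weighted formula "Smokes(x)" with weight w over a domain
# of n interchangeable people. (Illustrative only; domain recursion
# handles far richer theories than this.)
w, n = 0.7, 10

def partition_ground(w, n):
    """Ground partition function: enumerate all 2^n possible worlds."""
    z = 0.0
    for world in itertools.product([0, 1], repeat=n):
        z += math.exp(w * sum(world))  # each true atom contributes e^w
    return z

def partition_lifted(w, n):
    """Lifted partition function: the n atoms are exchangeable, so only
    the count of true atoms matters, giving Z = (1 + e^w)^n directly."""
    return (1.0 + math.exp(w)) ** n

print(partition_ground(w, n))   # sums 2^n = 1024 terms
print(partition_lifted(w, n))   # same value, no grounding

Both calls print the same number; the lifted version never touches an individual ground atom, which is the sense in which exchangeable objects are treated "as a whole".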





Scaling-up Importance Sampling for Markov Logic Networks

Neural Information Processing Systems

Markov Logic Networks (MLNs) are weighted first-order logic templates for generating large (ground) Markov networks. Lifted inference algorithms for them bring the power of logical inference to probabilistic inference. These algorithms operate as much as possible at the compact first-order level, grounding or propositionalizing the MLN only as necessary. As a result, lifted inference algorithms can be much more scalable than propositional algorithms that operate directly on the much larger ground network. Unfortunately, existing lifted inference algorithms suffer from two interrelated problems, which severely affect their scalability in practice. First, for most real-world MLNs, which have complex structure, they are unable to exploit symmetries and end up grounding most atoms (the grounding problem).
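To see why grounding dominates the cost, consider the minimal sketch below. The friends-and-smokers template, the predicate names, and the domain are hypothetical illustrations, not taken from the paper: a single binary formula over n constants already produces n^2 ground clauses, and real MLNs contain many such templates.

import itertools

# Hypothetical MLN template (names are illustrative):
#   w : Friends(x, y) AND Smokes(x) -> Smokes(y)
constants = [f"P{i}" for i in range(100)]  # domain of 100 people

# Naive grounding substitutes every pair of constants for (x, y),
# producing |domain|^2 = 10,000 ground clauses for this one template.
ground_clauses = [
    (("Friends", x, y), ("Smokes", x), ("Smokes", y))
    for x, y in itertools.product(constants, repeat=2)
]
print(len(ground_clauses))  # 10000 -- quadratic blow-up in domain size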


Domain size asymptotics for Markov logic networks

Koponen, Vera

arXiv.org Artificial Intelligence

A Markov logic network (MLN) determines a probability distribution on the set of structures, or "possible worlds", with an arbitrary finite domain. We study the properties of such distributions as the domain size tends to infinity. Three types of concrete examples of MLNs will be considered, and the properties of random structures with domain sizes tending to infinity will be studied: (1) Arbitrary quantifier-free MLNs over a language with only one relation symbol, which has arity 1. In this case we give a fairly complete characterization of the possible limit behaviours of random structures. (2) An MLN that favours graphs with fewer triangles (or, more generally, fewer k-cliques). As a corollary of the analysis, a "δ-approximate 0-1 law" for first-order logic is obtained. (3) An MLN that favours graphs with fewer vertices of degree higher than a fixed (but arbitrary) number. The analysis shows that, depending on which "soft constraints" an MLN uses, the limit behaviour of random structures can be quite different, and the weights of the soft constraints may or may not influence the limit behaviour. It will also be demonstrated, using (1), that quantifier-free MLNs and lifted Bayesian networks (in a broad sense) are asymptotically incomparable, roughly meaning that there is a sequence of distributions on possible worlds with increasing domain sizes that can be defined by one of the formalisms but not even approximated by the other. In a rather general context it is also shown that, on large domains, the distribution determined by an MLN concentrates almost all of its probability mass on a totally different part of the space of possible worlds than the uniform distribution does.
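For reference, the distribution an MLN induces on worlds can be written explicitly. The block below states the standard MLN definition and works out the single-unary-relation situation of item (1); the notation is ours, not the paper's, and the worked case is only the simplest instance of the limit behaviour studied.

% Standard MLN distribution over possible worlds \omega on a domain of
% size n: formula F_i has weight w_i and n_i(\omega) true groundings.
\[
  P_n(\omega) = \frac{1}{Z_n}
    \exp\Bigl(\sum_i w_i \, n_i(\omega)\Bigr),
  \qquad
  Z_n = \sum_{\omega'} \exp\Bigl(\sum_i w_i \, n_i(\omega')\Bigr).
\]
% Worked unary case, cf. item (1): a single quantifier-free formula
% R(x) with weight w makes the n ground atoms independent, so the
% number of domain elements satisfying R is Binomial(n, p) with
\[
  p = \frac{e^{w}}{1 + e^{w}},
\]
% and the fraction of elements in R concentrates around p as n grows,
% showing how a weight can pin down the limiting behaviour.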


Improving Out-of-Distribution Detection with Markov Logic Networks

Kirchheim, Konstantin, Ortmeier, Frank

arXiv.org Artificial Intelligence

Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models operating in open-world scenarios. Current OOD detectors mainly rely on statistical models to identify unusual patterns in the latent representations of a deep neural network. This work proposes to augment existing OOD detectors with probabilistic reasoning, utilizing Markov logic networks (MLNs). MLNs connect first-order logic with probabilistic reasoning to assign probabilities to inputs based on weighted logical constraints defined over human-understandable concepts, which offers improved explainability. Through extensive experiments on multiple datasets, we demonstrate that MLNs can significantly enhance the performance of a wide range of existing OOD detectors while maintaining computational efficiency. Furthermore, we introduce a simple algorithm for learning logical constraints for OOD detection from a dataset and showcase its effectiveness.
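Concretely, a score of this flavour can be computed as a weighted count of satisfied constraints over detected concepts. The sketch below is a minimal illustration under assumed concepts, rules, and weights; none of them come from the paper, and the paper's learned constraints and detector integration are not reproduced here.

# Hypothetical MLN-style OOD scoring. Each constraint is a weighted
# propositional rule over binary concept detections for an input,
# e.g. "has_fur -> not has_feathers".

def mln_score(concepts, constraints):
    """Unnormalized log-probability: sum of weights of satisfied rules."""
    return sum(w for w, rule in constraints if rule(concepts))

constraints = [
    (2.0, lambda c: not c["has_fur"] or not c["has_feathers"]),
    (1.5, lambda c: not c["has_wheels"] or not c["is_animal"]),
    (0.5, lambda c: not c["is_animal"] or c["has_fur"] or c["has_feathers"]),
]

in_dist = {"has_fur": True, "has_feathers": False,
           "has_wheels": False, "is_animal": True}
anomaly = {"has_fur": True, "has_feathers": True,
           "has_wheels": True, "is_animal": True}

# Lower scores flag inputs whose detected concepts violate constraints.
print(mln_score(in_dist, constraints))  # 4.0 -- all rules satisfied
print(mln_score(anomaly, constraints))  # 0.5 -- two rules violated

Thresholding such a score (or combining it with an existing detector's score) is one simple way the weighted-constraint view can augment a statistical OOD detector.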

